27 research outputs found

    BioSimMER: Virtual Reality Based Experiential Learning

    Slides from a presentation given at the UNM Health Sciences Center

    A computer-based training system combining virtual reality and multimedia

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special-purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.
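    The core idea above — using the 3D model itself as the index into the multimedia — can be sketched as a simple lookup from scene-graph object names to media resources. This is an illustrative sketch only; the object and file names are invented, not taken from the paper.

```python
# Hypothetical sketch: the 3D scene graph doubles as a navigation index
# into the multimedia library. Selecting an object in the virtual world
# retrieves the training material linked to it. All names are illustrative.

MULTIMEDIA_INDEX = {
    "control_panel": ["control_panel_overview.mp4", "panel_wiring.pdf"],
    "coolant_pump": ["pump_maintenance.mp4"],
}

def media_for_object(object_name):
    """Return the multimedia resources linked to a 3D model object."""
    return MULTIMEDIA_INDEX.get(object_name, [])
```

    In this scheme, pointing at a component in the virtual environment and pointing at the same component in the documentation are the same gesture, which is the "natural mapping" the abstract describes.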

    An introductory VR course for undergraduates incorporating foundation, experience and capstone

    This paper presents the structure, pedagogy, and motivation for an introductory undergraduate course in Virtual Reality. The course is offered as an elective at the 400-level; hence, students taking the course are juniors and seniors who have completed a substantial portion of their Computer Science curriculum. The course incorporates multiple components of VR theory and practice, including hardware and software survey and analysis, human perception, and applications. It also contains a semester-long, hands-on development component utilizing a specific virtual reality environment. In addition, because VR is a broad, multidisciplinary field of study, the course provides an ideal environment for incorporating capstone elements that allow undergraduate students to tie together many of the computing principles learned during their undergraduate academic careers. Copyright 2005 ACM.

    Visually-guided haptic object recognition

    Sensory capabilities are vital if a robot is to function autonomously in unknown or partially specified environments, if it is to carry out complex, roughly detailed tasks, and if it is to interact with and learn from the world around it. Perception forms the all-important interface between the cogitative organism and the world in which it must act and survive. Hence, the first step toward intelligent, autonomous robots is to develop this interface--to provide robots with perceptual capabilities. This work presents a model for robotic perception. Within the framework of this model, we have developed a system which utilizes passive vision and active touch for the task of object categorization. The system is organized as a highly modularized, distributed hierarchy of domain-specific and informationally encapsulated knowledge-based experts. The visual subsystem is passive and consists of a two-dimensional region analysis and a three-dimensional edge analysis. The haptic subsystem is active and consists of a set of modules which either execute exploratory procedures to extract information from the world or combine information from lower-level modules into more complex representations. We also address the issues of visually-guided haptic exploration and intersensory integration. Finally, we establish representational and reasoning paradigms for dealing with generic objects. Both representation and reasoning are feature-based. The representation includes both definitional information in the form of a hierarchy of frames and spatial/geometric information in the form of the spatial polyhedron.
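    The hierarchy of experts described above — low-level modules running exploratory procedures, higher-level modules fusing their output into a feature-based description — can be sketched in miniature. This is not the paper's code; the procedure names echo standard haptic exploratory procedures, and the feature values are invented.

```python
# Illustrative sketch of a two-level hierarchy of "knowledge-based experts":
# low-level haptic modules run exploratory procedures on a surface sample,
# and a higher-level module merges their results into one feature-based
# object description. All names and thresholds are assumptions.

def lateral_motion(surface):
    """Exploratory procedure: estimate texture from measured friction."""
    return {"texture": "rough" if surface.get("friction", 0) > 0.5 else "smooth"}

def enclosure(surface):
    """Exploratory procedure: estimate coarse shape by an enclosing grasp."""
    return {"shape": surface.get("shape", "unknown")}

def combine(*feature_dicts):
    """Higher-level expert: merge low-level features into one description."""
    merged = {}
    for d in feature_dicts:
        merged.update(d)
    return merged

obj = {"friction": 0.7, "shape": "cylinder"}
description = combine(lateral_motion(obj), enclosure(obj))
# description is {"texture": "rough", "shape": "cylinder"}
```

    The modularity matters: each expert is informationally encapsulated, so vision-derived features could be merged through the same `combine` step without the haptic modules knowing about them.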

    A Distributed Virtual Reality Simulation System for Situational Training


    WeeBot: A novel method for infant control of a robotic mobility device

    A novel method for controlling a robotic mobility platform, the WeeBot, is presented. The WeeBot permits an infant seated on the robot to control its motion by leaning in the direction of desired movement. The WeeBot hardware and software are discussed, and the results of a pilot feasibility study are presented. This study shows that, after five training sessions, typically developing infants ages six to nine months were able to demonstrate directed movement of the WeeBot. © 2012 IEEE
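    The lean-to-motion mapping described above can be sketched as converting a seat's weight distribution into a drive command. This is a hypothetical illustration in the spirit of the abstract; the four-load-cell layout, the dead zone, and all values are assumptions, not details from the paper.

```python
# Hypothetical sketch of lean-based control: four seat load cells
# (front, back, left, right) are mapped to a (forward, lateral) drive
# command. Sensor layout and dead-zone threshold are assumptions.

DEAD_ZONE = 0.1  # ignore small postural sway so the robot stays still

def lean_vector(front, back, left, right):
    """Map four load-cell readings to a normalized (forward, lateral) command."""
    total = front + back + left + right
    if total == 0:
        return (0.0, 0.0)  # no rider detected
    fwd = (front - back) / total   # lean forward -> positive forward command
    lat = (right - left) / total   # lean right -> positive lateral command
    if abs(fwd) < DEAD_ZONE:
        fwd = 0.0
    if abs(lat) < DEAD_ZONE:
        lat = 0.0
    return (fwd, lat)

# Infant leans forward: weight shifts to the front cell.
command = lean_vector(6, 2, 4, 4)  # (0.25, 0.0): drive forward
```

    Normalizing by total weight makes the command independent of the infant's mass, and the dead zone keeps ordinary wiggling from moving the robot — both plausible requirements for a device trained on six- to nine-month-olds.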

    Emotional and performance attributes of a VR game: A study of children

    In this paper we present the results of a study to determine the effect and efficacy of a Virtual Reality game designed to elicit movements of the upper extremity. The study is part of an on-going research effort to explore the use of Virtual Reality as a means of improving the effectiveness of therapy for children with motor impairments. The current study addresses the following questions: (1) Does a VR game requiring repetitive motion sufficiently engage a child? (2) Are there detrimental physiological or sensory side effects when a child uses an HMD-based VR system? (3) Are the movements produced by a child while playing a VR game comparable to movements produced when carrying out a similar task in the real world? Based on study results, the enjoyment level for the game was high. ANOVA performed on the results for physical well-being pre- and post-VR showed no overall ill effects as perceived by the children. Playing the game did not affect proprioception based on pre- and post-VR test scores. Motion data show similar, but not identical, overall movement profiles for similar tasks performed in the real and virtual worlds. Motor learning occurs in both environments, as measured by time to complete a game cycle.
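    The pre-/post-VR comparison described above is a one-way ANOVA; the statistic can be computed by hand as a ratio of between-group to within-group variance. The sketch below uses invented well-being scores purely to illustrate the analysis type — the data are not from the study.

```python
# Illustrative one-way ANOVA F statistic, computed from scratch.
# The pre/post "well-being" scores below are invented for illustration.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: scatter of scores around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

pre = [4, 5, 4, 5, 4]   # hypothetical well-being scores before VR
post = [4, 4, 5, 5, 4]  # hypothetical scores after VR
f_stat = one_way_anova_f(pre, post)  # 0.0 here: the group means are identical
```

    A small F (relative to the F distribution's critical value for the given degrees of freedom) is what "no overall ill effects" corresponds to statistically: the pre/post difference in means is negligible next to the ordinary scatter among children.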